110 research outputs found
A Full Probabilistic Model for Yes/No Type Crowdsourcing in Multi-Class Classification
Crowdsourcing has become widely used in supervised scenarios where training
sets are scarce and difficult to obtain. Most crowdsourcing models in the
literature assume labelers can provide answers to full questions. In
classification contexts, full questions require a labeler to discern among all
possible classes. Unfortunately, discernment is not always easy in realistic
scenarios. Labelers may not be experts in differentiating all classes. In this
work, we provide a full probabilistic model for a shorter type of query. Our
shorter queries only require "yes" or "no" responses. Our model estimates a
joint posterior distribution of matrices related to labelers' confusions and
the posterior probability of the class of every object. We developed an
approximate inference approach using Monte Carlo sampling and Black Box
Variational Inference, for which we derive the necessary gradients. We built
two realistic crowdsourcing scenarios to test our model.
The first scenario involves queries about irregular astronomical time series;
the second involves image classification of animals. We achieved results
that are comparable with those of full query crowdsourcing. Furthermore, we
show that modeling labelers' failures plays an important role in estimating
true classes. Finally, we provide the community with two real datasets obtained
from our crowdsourcing experiments. All our code is publicly available.

Comment: SIAM International Conference on Data Mining (SDM19), 9 official
pages, 5 supplementary pages
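The inference strategy described above (Black Box Variational Inference with Monte Carlo gradient estimates) can be illustrated on a toy problem. This is a minimal sketch of the generic score-function BBVI estimator, not the authors' model: the target distribution, variational family, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Toy Black Box Variational Inference (BBVI) sketch. We fit a Gaussian
# q(z; mu, sigma) to an unnormalized target log p(z) = log N(z; 3, 1)
# using the score-function (REINFORCE) gradient estimator:
#   grad ELBO ~ (1/S) * sum_s [grad_lambda log q(z_s)] * [log p(z_s) - log q(z_s)]

rng = np.random.default_rng(0)

def log_p(z):
    # Illustrative unnormalized target: Gaussian shape centred at 3.
    return -0.5 * (z - 3.0) ** 2

mu, log_sigma = 0.0, 0.0           # variational parameters (sigma = exp(log_sigma))
lr, n_samples = 0.05, 200

for step in range(2000):
    sigma = np.exp(log_sigma)
    z = rng.normal(mu, sigma, size=n_samples)            # Monte Carlo samples from q
    log_q = -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma)
    f = log_p(z) - log_q                                 # per-sample ELBO integrand
    score_mu = (z - mu) / sigma ** 2                     # d/dmu     log q(z)
    score_ls = ((z - mu) ** 2) / sigma ** 2 - 1.0        # d/dlogsig log q(z)
    mu += lr * np.mean(score_mu * f)                     # stochastic gradient ascent
    log_sigma += lr * np.mean(score_ls * f)

print(mu, np.exp(log_sigma))  # should approach the target's mean 3 and scale 1
```

In practice BBVI of this kind is paired with control variates or Rao-Blackwellization to tame the estimator's variance; the sketch omits those refinements for brevity.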
The expansion rate of the intermediate Universe in light of Planck
We use cosmology-independent measurements of the expansion history in the
redshift range 0.1 < z < 1.2 and compare them with the Cosmic Microwave
Background-derived expansion history predictions. The motivation is to
investigate if the tension between the local (cosmology independent) Hubble
constant H0 value and the Planck-derived H0 is also present at other redshifts.
We conclude that there is no tension between Planck and cosmology-independent
measurements of the Hubble parameter H(z) at 0.1 < z < 1.2 for the
LCDM model (odds of tension are only 1:15, statistically not significant).
Considering extensions of the LCDM model does not improve these odds (actually
makes them worse), thus favouring the simpler model over its extensions. On the
other hand the H(z) data are also not in tension with the local H0 measurements
but the combination of all three data-sets shows a highly significant tension
(odds ~ 1:400). Thus the new data deepen the mystery of the mismatch between
Planck and local H0 measurements, and cannot unambiguously determine whether it
is an effect localised at a particular redshift. Having said this, we find that
adopting the NGC4258 maser distance as the anchor for H0 brings the
odds to comfortable values.
Further, using only the expansion history measurements we constrain, within
the LCDM model, H0 = 68.5 +- 3.5 and Omega_m = 0.32 +- 0.05 without relying on
any CMB prior. We also address the question of how smooth the expansion history
of the universe is given the cosmology-independent data and conclude that there
is no evidence for deviations from smoothness in the expansion history, nor for
variations with time in the value of the equation of state of dark energy.

Comment: Submitted to Physics of the Dark Universe
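Within flat LCDM, the constraint quoted above (H0 = 68.5 +- 3.5, Omega_m = 0.32 +- 0.05) fixes the expansion history through the Friedmann relation H(z) = H0 * sqrt(Omega_m (1+z)^3 + 1 - Omega_m). A quick sketch using the central values; the flatness assumption and the neglect of radiation are simplifications on my part, and the function is the textbook relation, not the paper's analysis code.

```python
import math

def hubble(z, H0=68.5, Om=0.32):
    """H(z) in km/s/Mpc for a flat LCDM model, ignoring radiation.

    H0 and Om default to the central values quoted in the abstract.
    """
    return H0 * math.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

# Expansion rate across the redshift range probed (0.1 < z < 1.2):
for z in (0.1, 0.5, 1.2):
    print(f"H({z}) = {hubble(z):.1f} km/s/Mpc")
```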
Planck and the local Universe: quantifying the tension
We use the latest Planck constraints, and in particular constraints on the
derived parameters (Hubble constant and age of the Universe) for the local
universe and compare them with local measurements of the same quantities. We
propose a way to quantify whether cosmological parameters constraints from two
different experiments are in tension or not. Our statistic, T, is an evidence
ratio and therefore can be interpreted with the widely used Jeffreys scale. We
find that in the framework of the LCDM model, the Planck inferred two
dimensional, joint, posterior distribution for the Hubble constant and age of
the Universe is in "strong" tension with the local measurements; the odds being
~ 1:50. We explore several possibilities for explaining this tension and
examine the consequences both in terms of unknown errors and deviations from
the LCDM model. In some one-parameter LCDM model extensions, tension is reduced
whereas in other extensions, tension is instead increased. In particular, small
total neutrino masses are favored and a total neutrino mass above 0.15 eV makes
the tension "highly significant" (odds ~ 1:150). A consequence of accepting
this interpretation of the tension is that the degenerate neutrino hierarchy is
highly disfavoured by cosmological data and the direct hierarchy is slightly
favored over the inverse.

Comment: Submitted to Physics of the Dark Universe
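The evidence-ratio statistic T is read off the Jeffreys scale. A small sketch of that mapping, using Jeffreys' conventional log10 bands; the thresholds and band labels below are the textbook ones, not taken from the paper, and differ slightly from the "strong" / "highly significant" wording used above.

```python
import math

def jeffreys_grade(odds_against):
    """Map odds of tension (e.g. 50 for odds ~ 1:50) to a Jeffreys-scale
    grade via the conventional log10 evidence bands."""
    x = math.log10(odds_against)
    if x < 0.5:
        return "barely worth mentioning"
    if x < 1.0:
        return "substantial"
    if x < 1.5:
        return "strong"
    if x < 2.0:
        return "very strong"
    return "decisive"

print(jeffreys_grade(50))   # odds ~ 1:50 from the LCDM comparison
print(jeffreys_grade(150))  # odds ~ 1:150 with total neutrino mass > 0.15 eV
```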